# Education Scenario Optimization

## Chinese Text Correction 7b
- License: Apache-2.0
- Author: shibing624
- Tags: Large Language Model, Transformers, Chinese
- Downloads: 522 · Likes: 16

Qwen2.5-7B-Instruct is a 7B-parameter Chinese instruction-tuned large language model based on the Qwen2.5 architecture, suited to text generation and reasoning tasks.
## Chinese Text Correction 1.5b
- License: Apache-2.0
- Author: shibing624
- Tags: Large Language Model, Transformers, Chinese
- Downloads: 1,085 · Likes: 9

Qwen2.5-1.5B-Instruct is a 1.5-billion-parameter Chinese instruction-tuned model based on the Qwen2.5 architecture, suited to text generation and reasoning tasks.
## Mistral 7B Instruct Aya 101
- License: Apache-2.0
- Author: MaziyarPanahi
- Tags: Large Language Model, Transformers, Multilingual
- Downloads: 92 · Likes: 12

A multilingual instruction-following model fine-tuned from Mistral-7B-Instruct-v0.2, supporting 101 languages.
## Cs224n Squad2.0 Distilbert Base Uncased
- Author: elgeish
- Tags: Question Answering, Transformers
- Downloads: 15 · Likes: 0

This model was created as a baseline for a CS224n student project; it is based on the DistilBERT architecture and fine-tuned on the SQuAD 2.0 dataset for question-answering tasks.
© 2025 AIbase